Ethical Challenges and Evolving Strategies in the Integration of Artificial Intelligence into Clinical Practice

Weiner, Ellison B., Dankwa-Mullan, Irene, Nelson, William A., Hassanpour, Saeed

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has rapidly transformed various sectors, including healthcare, where it holds the potential to revolutionize clinical practice and improve patient outcomes. However, its integration into medical settings brings significant ethical challenges that need careful consideration. This paper examines the current state of AI in healthcare, focusing on five critical ethical concerns: justice and fairness, transparency, patient consent and confidentiality, accountability, and patient-centered and equitable care. These concerns are particularly pressing as AI systems can perpetuate or even exacerbate existing biases, often resulting from non-representative datasets and opaque model development processes. The paper explores how bias, lack of transparency, and challenges in maintaining patient trust can undermine the effectiveness and fairness of AI applications in healthcare. In addition, we review existing frameworks for the regulation and deployment of AI, identifying gaps that limit the widespread adoption of these systems in a just and equitable manner. Our analysis provides recommendations to address these ethical challenges, emphasizing the need for fairness in algorithm design, transparency in model decision-making, and patient-centered approaches to consent and data privacy. By highlighting the importance of continuous ethical scrutiny and collaboration between AI developers, clinicians, and ethicists, we outline pathways for achieving more responsible and inclusive AI implementation in healthcare. These strategies, if adopted, could enhance both the clinical value of AI and the trustworthiness of AI systems among patients and healthcare professionals, ensuring that these technologies serve all populations equitably.


Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control

Hobbs, Kerianne L., Li, Bernard

arXiv.org Artificial Intelligence

Designing a safe, trusted, and ethical AI may be practically impossible; however, designing AI with safe, trusted, and ethical use in mind is possible and necessary in safety and mission-critical domains like aerospace. Safe, trusted, and ethical use of AI are often used interchangeably; however, a system can be safely used but not trusted or ethical, have a trusted use that is not safe or ethical, and have an ethical use that is not safe or trusted. This manuscript serves as a primer to illuminate the nuanced differences between these concepts, with a specific focus on applications of Human-AI teaming in aerospace system control, where humans may be in, on, or out-of-the-loop of decision-making.


Viral 'AI yearbook' trend allows people to create nostalgic high-school photos from any era

FOX News

Fox News correspondent Grady Trimble has the latest on fears the technology will spiral out of control on 'Special Report.' Now everyone can be the coolest kid in school. Using artificial intelligence, people are turning themselves into nostalgic high school personas in a new and already-viral "AI yearbook" trend. Social media users have tried out AI-powered platforms such as the EPIK AI photo editor, which allows people to turn a modern photo of themselves into what looks like a yearbook portrait from any era (the 1990s has been the most popular). TikTok influencers such as Olivia Dunne and Joe Mele have also used these apps to transform themselves into classic high-school tropes like the jock, the nerd, the rebel, the popular kid and other stereotypes.


Back to school with AI: How parents and educators can ensure its ethical use in the classroom

FOX News

AI technology is quickly creeping into every industry, prompting new questions about whether online content comes from a human or a computer. The presence of advanced technology in the classroom may require conversations with students during this new school year. As artificial intelligence finds its way into more families' day-to-day routines, parents and teachers alike should be wary of how their kids are interacting with generative AI. This is according to SmartNews' head of trust and safety Arjun Narayan, who shared concerns during an interview with Fox News Digital. "As with any new technology, when it is very new, it's important to understand how you're engaging with that tech," said Narayan, who is based in Japan.


Artificial Intelligence AI Security - Hackers Online Club (HOC)

#artificialintelligence

In childhood, we used to write an essay on "Science is a miracle as well as a curse." Today, Artificial intelligence (A.I.) has changed the way we live, work, and communicate. Many industries have been transformed by it, including I.T., healthcare, finance, transportation, and manufacturing. The need for A.I. security has become more critical as the technology keeps evolving and becoming more sophisticated. A.I. can make our lives easier but can also be a cyber threat if misused.


Ethical Use of AI in Insurance Modeling and Decision-Making

#artificialintelligence

With increased availability of next-generation technology and data mining tools, insurance company use of external consumer data sets and artificial intelligence (AI) and machine learning (ML)-enabled analytical models is rapidly expanding and accelerating. Insurers have initially targeted key business areas such as underwriting, pricing, fraud detection, marketing distribution and claims management to leverage technical innovations to realize enhanced risk management, revenue growth and improved profitability. At the same time, regulators worldwide are intensifying their focus on the governance and fairness challenges presented by these complex, highly innovative tools – specifically, the potential for unintended bias against protected classes of people. In the United States, the Colorado Division of Insurance recently issued a first-in-the-nation draft regulation to support the implementation of a 2021 law passed by the state's legislature. This law (SB21-169) prohibits life insurers from using external personal data and information sources (ECDIS), or employing algorithms and models that use ECDIS, where the resulting impact of such use is unfair discrimination against consumers on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.


Microsoft ends AI ethics team while getting closer to OpenAI • The Register

#artificialintelligence

Microsoft has eliminated its entire team responsible for ensuring the ethical use of AI software at a time when the Windows giant is ramping up its use of machine learning technology. The decision to ditch the ethics and society team within its artificial intelligence organization is part of the 10,000 job cuts Microsoft announced in January, which will continue rolling through the IT titan into next year. The hit to this particular unit may remove some guardrails meant to ensure Microsoft's products that integrate machine learning features meet the mega-corp's standards for ethical use of AI. And it comes as discussion rages about the effects of controversial artificial intelligence models on society at large. Baking AI ethics into the whole business – as something for all employees to consider – seems kinda like when Bill Gates told his engineers in 2002 to make security an organization-wide priority, which obviously went really well.


Actuaries highlight need for ethical use of AI in insurance - Reinsurance News

#artificialintelligence

While artificial intelligence (AI) promises faster and smarter decision making, the Actuaries Institute and the Australian Human Rights Commission (AHRC) worry about potential discrimination and highlight the need to prevent this. To address the issue, they created a Guidance Resource designed to help insurers and actuaries comply with federal anti-discrimination legislation when AI is used in pricing or underwriting insurance products. The guidance was developed after a 2021 report by the AHRC that looked at the human rights impacts of new and emerging technologies, including AI-informed decision making. The Actuaries Institute strongly supported the report's recommendation to develop a set of guidelines for use by government and non-government organisations on complying with federal anti-discrimination laws when AI has been used in decision making. It approached the AHRC with a collaboration offer and together they developed these guidelines.


Artificial intelligence: 3 tips to ensure responsible and ethical use

#artificialintelligence

Artificial intelligence (AI) already impacts our daily lives in ways we never imagined just a few years ago – and in ways that we're unaware of now. From self-driving cars to voice-assisted devices to predictive text messaging, AI has become a necessary and unavoidable part of our society, including in the workplace. Data shows that the use of AI in business is increasing. In 2019, a Gartner report stated that 37% of organizations had implemented AI in some capacity. Most recently, Gartner predicted that the global AI software market would be worth $62.5 billion by the end of this year, a 21% jump from the previous year.


UF supports the ethical use of artificial intelligence

#artificialintelligence

The University of Florida, a proponent of ethics in artificial intelligence, is part of a new global agreement with seven other universities worldwide that are committed to the development of human-centered approaches to artificial intelligence (AI) that will impact people everywhere. During the Global University Summit at Notre Dame University, Joseph Glover, UF provost and senior vice president of academic affairs, signed The Rome Call for AI Ethics on October 27 on behalf of the University of Florida and served as a panelist for the two-day summit, attended by 36 universities invited from around the world. The event was held in Notre Dame, IN. The signing indicates a commitment to the principles of the Rome Call for AI Ethics: to ensure artificial intelligence serves the interests of humanity and to support regulations and principles that deliver emerging technologies in an ethically centered way. UF joins a network of universities that will share best practices, tools, and educational content, as well as meet regularly to share updates and discuss innovative ideas.